The Minimal Error Conjugate Gradient Method Is a Regularization Method
Abstract
The regularizing properties of the conjugate gradient iteration, applied to the normal equation of a linear ill-posed problem, were established by Nemirovskii in 1986. A seemingly more attractive variant of this algorithm is the minimal error method suggested by King. The present paper analyzes the regularizing properties of the minimal error method. It is shown that the discrepancy principle is not a regularizing stopping rule for the minimal error method. Instead, a different stopping rule is suggested which leads to order-optimal convergence rates.
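To make the algorithm concrete, the following is a minimal NumPy sketch of the minimal error iteration, i.e. conjugate gradients applied to A Aᵀ z = y with x = Aᵀ z, so that each iterate minimizes the error norm over the Krylov subspace K_k(AᵀA, Aᵀy). The test problem (a Hilbert matrix), the noise level, and the fixed a priori stopping index are illustrative assumptions; the sketch does not implement the particular stopping rule analyzed in the paper.

import numpy as np

def minimal_error_cg(A, y, n_iter):
    # Conjugate gradients on A A^T z = y with x = A^T z (minimal error method).
    # The iteration count n_iter acts as the regularization parameter and is
    # chosen a priori here, not by the discrepancy principle.
    x = np.zeros(A.shape[1])
    r = y - A @ x              # residual in data space
    p = A.T @ r                # search direction in solution space
    for _ in range(n_iter):
        alpha = (r @ r) / (p @ p)
        x = x + alpha * p
        r_new = r - alpha * (A @ p)
        beta = (r_new @ r_new) / (r @ r)
        p = A.T @ r_new + beta * p
        r = r_new
    return x

# Illustrative ill-posed test problem: a severely ill-conditioned Hilbert matrix.
n = 64
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.sin(np.linspace(0.0, np.pi, n))
rng = np.random.default_rng(0)
y_noisy = A @ x_true + 1e-4 * rng.standard_normal(n)

x_rec = minimal_error_cg(A, y_noisy, n_iter=5)   # a priori stopping index (assumed)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))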
Similar resources
Optimum Shape Design of a Radiant Oven by the Conjugate Gradient Method and a Grid Regularization Approach
This study presents an optimization problem for shape design of a 2-D radiant enclosure with transparent medium and gray-diffuse surfaces. The aim of the design problem is to find the optimum geometry of a radiant enclosure from the knowledge of temperature and heat flux over some parts of boundary surface, namely the design surface. The solution of radiative heat transfer is based on the net r...
Optimal control of surface heat flux in a two-dimensional body with temperature-dependent properties
In this paper the optimal control of boundary heat flux in a 2-D solid body with an arbitrary shape is performed in order to achieve the desired temperature distribution at a given time interval. The boundary of the body is subdivided into a number of components. On each component a time-dependent heat flux is applied which is independent of the others. Since the thermophysical properties are t...
Image Appraisal for 2D and 3D Electromagnetic Inversion
Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two and three dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and model covariance matrices can be directly calculated. The columns of the model resolution matrix are shown to yield empirical estimates of the ho...
Adaptive regularization of neural networks using conjugate gradient
Recently we suggested a regularization scheme which iteratively adapts regularization parameters by minimizing validation error using simple gradient descent. In this contribution we present an improved algorithm based on the conjugate gradient technique. Numerical experiments with feed-forward neural networks successfully demonstrate improved generalization ability and lower computational cost.
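As a rough illustration of this idea, the sketch below adapts a single regularization parameter by minimizing validation error with a nonlinear conjugate gradient optimizer (SciPy's "CG" method). Ridge regression stands in for the neural network, and all data, names, and the one-parameter setup are assumptions for illustration, not the authors' algorithm.

import numpy as np
from scipy.optimize import minimize

# Synthetic data split into training and validation sets (illustrative).
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
w_true = rng.standard_normal(20)
y = X @ w_true + 0.5 * rng.standard_normal(200)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

def validation_loss(log_lam):
    # Train a ridge model for the given regularization parameter (log scale
    # keeps it positive) and return the validation mean squared error.
    lam = np.exp(log_lam[0])
    w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1]), X_tr.T @ y_tr)
    return np.mean((X_va @ w - y_va) ** 2)

# Nonlinear conjugate gradient on the validation error w.r.t. the parameter.
res = minimize(validation_loss, x0=[0.0], method="CG")
print("adapted lambda:", np.exp(res.x[0]), "validation MSE:", res.fun)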
Extrapolation vs. projection methods for linear systems of equations
It is shown that the four vector extrapolation methods, minimal polynomial extrapolation, reduced rank extrapolation, modified minimal polynomial extrapolation, and topological epsilon algorithm, when applied to linearly generated vector sequences, are Krylov subspace methods, and are equivalent to some well known conjugate gradient type methods. A unified recursive method that includes the con...